91.
Based on data on industrial structure adjustment and the environment in China's resource-depleted cities from 2003 to 2016, this study measures the coupling coordination degree between industrial structure adjustment and the reduction of the "three industrial wastes" (wastewater, waste gas, and solid waste), and further applies a panel VAR model to quantitatively analyze the dynamic interaction between industrial structure adjustment and pollutant emissions. The results show that the coupling coordination between industrial structure adjustment and pollution reduction in China's resource-depleted cities is still on the verge of maladjustment. Impulse response plots show that industrial structure upgrading has a long-run effect in promoting the reduction of industrial wastewater and industrial sulfur dioxide emissions, and that wastewater reduction in turn helps drive industrial structure upgrading. For the transformation and development of China's resource-depleted cities, industrial structure adjustment should be promoted further, the tertiary and high-tech industries developed vigorously, and the optimization and upgrading of the economic structure accelerated, so that industrial upgrading drives green, high-quality urban development.
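The abstract does not define the coupling coordination degree it measures. A common formulation in this literature sets a coupling degree C = 2·sqrt(u1·u2)/(u1+u2), a comprehensive evaluation T = α·u1 + β·u2, and a coordination degree D = sqrt(C·T). The sketch below assumes that formulation; the function name and weights are illustrative, not taken from the paper.

```python
import math

def coupling_coordination(u1, u2, alpha=0.5, beta=0.5):
    """Coupling coordination degree D for two normalized subsystem scores.

    u1, u2 in (0, 1]: e.g. composite indices for industrial-structure
    adjustment and pollution reduction. alpha/beta weight each subsystem's
    contribution to the comprehensive evaluation T.
    """
    c = 2.0 * math.sqrt(u1 * u2) / (u1 + u2)   # coupling degree C
    t = alpha * u1 + beta * u2                  # comprehensive evaluation T
    return math.sqrt(c * t)                     # coordination degree D
```

Under the usual grading scheme, D values in roughly the 0.4-0.5 band are labelled "on the verge of maladjustment", which is the state the abstract reports.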
92.
To address the problem that information from different modalities is not fully exploited in current user-profiling work, this paper proposes a cross-modal learning approach and designs a user-profile model based on multimodal fusion. First, a Stacking ensemble method is used to fuse several cross-modal joint-representation networks and learn the corresponding model combinations; an attention mechanism is then introduced so that the model can learn how much each modality's representation contributes to the prediction. The improved model has a carefully designed network structure and objective function and generates a joint feature representation composed of feature-level and decision-level fusion, so that related features from different modalities can be merged. Experiments on real-world datasets show that the proposed model outperforms the current state-of-the-art baselines.
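The paper's networks are not specified in the abstract, but the attention step it describes amounts to softmax-weighting per-modality representations before combining them. A minimal decision-level sketch, with fixed scalar scores standing in for the learned attention network:

```python
import math

def attention_fuse(modalities):
    """Fuse per-modality representations by softmax attention weights.

    modalities: dict name -> (score, vector). In the real model the score
    would come from a learned attention network; here it is a given scalar.
    Returns the weights and the weighted sum of the vectors.
    """
    z = sum(math.exp(s) for s, _ in modalities.values())
    weights = {k: math.exp(s) / z for k, (s, _) in modalities.items()}
    dim = len(next(iter(modalities.values()))[1])
    fused = [0.0] * dim
    for k, (_, vec) in modalities.items():
        for i, v in enumerate(vec):
            fused[i] += weights[k] * v
    return weights, fused
```

With equal scores every modality contributes equally; training the scores is what lets the model learn each modality's contribution to the prediction.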
93.
Many researchers have analyzed the clustering of terrorist incidents using the Global Terrorism Database (GTD) with methods such as game theory, k-nearest neighbors, and support vector machines, and some results have been obtained. However, earlier work did not adequately address the data's sparsity or its high dimensionality and redundancy, both of which reduce classification accuracy. This paper proposes a TFM classification model that combines minimum-redundancy maximum-relevance (mRMR) feature selection with factorization machines (FM): an incremental search finds an approximately optimal feature subset, addressing high dimensionality and redundancy, while the FM component handles data sparsity; the preprocessed terrorist-attack data are then classified with the TFM model. Comparing the Matthews correlation coefficient (MCC) of naive Bayes (NB), support vector machine (SVM), logistic regression (LR), and TFM, the MCC of TFM improves on NB, SVM, and LR by 49.9%, 2.5%, and 2.3%, respectively, indicating that the TFM model is feasible.
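The incremental mRMR search the abstract mentions is a greedy loop: at each step, add the candidate feature whose relevance to the label, minus its mean redundancy with the features already selected, is largest. A sketch with precomputed relevance and pairwise-redundancy scores (in practice these would be mutual-information estimates; the data structures here are illustrative):

```python
def mrmr_select(relevance, redundancy, k):
    """Greedy mRMR feature selection.

    relevance: dict feature -> relevance to the class label.
    redundancy: dict (f, g) sorted tuple -> pairwise redundancy.
    Returns k features, starting from the single most relevant one.
    """
    selected = [max(relevance, key=relevance.get)]
    while len(selected) < k:
        best, best_score = None, float("-inf")
        for f in relevance:
            if f in selected:
                continue
            red = sum(redundancy[tuple(sorted((f, g)))] for g in selected)
            score = relevance[f] - red / len(selected)
            if score > best_score:
                best, best_score = f, score
        selected.append(best)
    return selected
```

Note how a highly relevant but redundant feature can lose to a less relevant, non-redundant one, which is exactly the behavior that tames high-dimensional, redundant GTD features.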
94.
Projections of future climate change cannot rely on a single model. It has become common to rely on multiple simulations generated by Multi-Model Ensembles (MMEs), especially to quantify the uncertainty about what would constitute an adequate model structure. But, as Parker (2018) points out, one of the remaining philosophically interesting questions is: "How can ensemble studies be designed so that they probe uncertainty in desired ways?" This paper offers two interpretations of what General Circulation Models (GCMs) are and how MMEs made of GCMs should be designed. In the first interpretation, models are combinations of modules and parameterisations; an MME is obtained by "plugging and playing" with interchangeable modules and parameterisations. In the second interpretation, models are aggregations of expert judgements that result from a history of epistemic decisions made by scientists about the choice of representations; an MME is a sampling of expert judgements from modelling teams. We argue that, while the two interpretations involve distinct domains from philosophy of science and social epistemology, they could both be used in a complementary manner to explore ways of designing better MMEs.
95.
In this paper, we assess the predictive content of latent economic policy uncertainty and data surprise factors for forecasting and nowcasting gross domestic product (GDP) using factor-type econometric models. Our analysis focuses on five emerging market economies: Brazil, Indonesia, Mexico, South Africa, and Turkey. We carry out a forecasting horse race in which predictions from various models are compared. These models may (or may not) contain latent uncertainty and surprise factors constructed using both local and global economic datasets. The set of models that we examine includes simple benchmark linear econometric models as well as dynamic factor models estimated using a variety of frequentist and Bayesian data shrinkage methods based on the least absolute shrinkage and selection operator (LASSO). We find that the inclusion of our new uncertainty and surprise factors leads to superior predictions of GDP growth, particularly when these latent factors are constructed using Bayesian variants of the LASSO. Overall, our findings point to the importance of spillover effects from global uncertainty and data surprises when predicting GDP growth in emerging market economies.
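The abstract does not spell out the LASSO machinery, but both the frequentist solver and the MAP view of the Bayesian LASSO shrink coefficients through the same soft-thresholding operator. A minimal coordinate-descent sketch, assuming standardized predictor columns (mean 0, variance 1); this is a generic illustration, not the paper's estimation code:

```python
def soft_threshold(x, lam):
    """Proximal operator of the L1 penalty: shrink x toward zero by lam."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate descent for min ||y - Xb||^2/(2n) + lam*||b||_1,
    assuming the columns of X are standardized."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(b[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            b[j] = soft_threshold(rho, lam)
    return b
```

Small coefficients are driven exactly to zero, which is what lets the factor models discard uninformative local or global series.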
96.
This paper presents a new spatial dependence model with an adjustment of feature difference. The model accounts for spatial autocorrelation in both the outcome variables and the residuals. The feature difference adjustment helps to emphasize feature changes across neighboring units while suppressing unobserved covariates shared within a neighborhood. The prediction at a given unit incorporates components that depend on the differences between the values of its main features and those of its neighboring units. In contrast to conventional spatial regression models, our model does not require a comprehensive list of global covariates to estimate the outcome variable at the unit, as common macro-level covariates are differenced away in the regression analysis. Using real-estate market data from Hong Kong, we apply Gibbs sampling to determine the posterior distribution of each model parameter. Our empirical analysis confirms that the adjustment of feature difference, with the inclusion of spatial error autocorrelation, produces better out-of-sample prediction performance than other conventional spatial dependence models. In addition, the analysis identifies the components with the most significant contributions.
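The differencing idea can be seen in isolation: replace each unit's features by their deviation from the neighborhood mean, so any covariate shared by a neighborhood cancels out. A sketch under that reading (the paper's full model, with spatial autocorrelation terms, is not reproduced here):

```python
def neighbor_differenced(features, adjacency):
    """Difference each unit's features against its neighbors' mean.

    features: dict unit -> list of feature values.
    adjacency: dict unit -> non-empty list of neighboring units.
    Covariates constant within a neighborhood map to zero.
    """
    out = {}
    for u, x in features.items():
        nbrs = adjacency[u]
        mean = [sum(features[v][i] for v in nbrs) / len(nbrs)
                for i in range(len(x))]
        out[u] = [xi - mi for xi, mi in zip(x, mean)]
    return out
```

A unit whose feature equals its neighbors' average gets a differenced value of zero, which is why no global covariate list is needed for those shared effects.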
97.
We consider finite state-space non-homogeneous hidden Markov models for forecasting univariate time series. Given a set of predictors, the time series are modeled via predictive regressions with state-dependent coefficients and time-varying transition probabilities that depend on the predictors via a logistic/multinomial function. In a hidden Markov setting, inference for logistic regression coefficients becomes complicated, and in some cases impossible, due to convergence issues. In this paper, we address this problem using the recently proposed Pólya-Gamma latent variable scheme. We also allow for model uncertainty regarding the predictors that affect the series both linearly (in the mean) and non-linearly (in the transition matrix). Predictor selection and inference on the model parameters are based on an automatic Markov chain Monte Carlo scheme with reversible jump steps. Hence, the proposed methodology can be used as a black box for predicting time series. Using simulation experiments, we illustrate the performance of our algorithm in various setups in terms of mixing properties, model selection, and predictive ability. An empirical study on realized volatility data shows that our methodology gives improved forecasts compared to benchmark models.
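The time-varying transition matrix the abstract describes is a row-wise multinomial-logistic function of the predictors. A sketch of that mapping alone (coefficient values are placeholders; the Pólya-Gamma sampler that infers them is not shown):

```python
import math

def transition_matrix(predictors, gamma):
    """K x K transition matrix of a non-homogeneous HMM at time t.

    P(s_t = j | s_{t-1} = i) is proportional to exp(gamma[i][j] . x_t),
    where gamma[i][j] is the coefficient vector for transition (i, j)
    and predictors is x_t. Each row is normalized via a stable softmax.
    """
    K = len(gamma)
    P = []
    for i in range(K):
        logits = [sum(g * x for g, x in zip(gamma[i][j], predictors))
                  for j in range(K)]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        P.append([e / z for e in exps])
    return P
```

Because each row is a softmax, the matrix is a valid stochastic matrix for any predictor values, while the predictors shift which regime is favored at each time step.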
98.
The ability to improve out-of-sample forecasting performance by combining forecasts is well established in the literature. This paper advances this literature in the area of multivariate volatility forecasts by developing two combination weighting schemes that exploit volatility persistence to emphasise certain losses within the combination estimation period. A comprehensive empirical analysis of out-of-sample forecast performance across varying dimensions, loss functions, sub-samples, and forecast horizons shows that the new approaches significantly outperform their counterparts in terms of statistical accuracy. Within the financial applications considered, significant benefits from combination forecasts relative to the individual candidate models are observed. Although the more sophisticated combination approaches consistently rank higher than the equally weighted approach, their performance is statistically indistinguishable given the relatively low power of these loss functions. Finally, within the applications, further analysis highlights how combination forecasts dramatically reduce the variability in the parameter of interest, namely the portfolio weight or beta.
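The paper's two weighting schemes are not detailed in the abstract. One generic way to "emphasise certain losses" is to discount each model's loss history exponentially, so recent losses (where persistence makes them informative) count more, and then set combination weights inversely proportional to the discounted losses. The sketch below implements that generic scheme, not the paper's actual estimators:

```python
def combination_weights(losses_per_model, decay):
    """Inverse discounted-loss combination weights.

    losses_per_model: dict model -> list of past positive losses
    (oldest first). decay in (0, 1]: values below 1 down-weight old
    losses, emphasising the recent part of the estimation period.
    Returns weights normalized to sum to one.
    """
    disc = {}
    for m, losses in losses_per_model.items():
        T = len(losses)
        disc[m] = sum(decay ** (T - 1 - t) * l for t, l in enumerate(losses))
    inv = {m: 1.0 / d for m, d in disc.items()}
    z = sum(inv.values())
    return {m: v / z for m, v in inv.items()}
```

With decay = 1 this reduces to plain inverse average-loss weighting; models with uniformly equal losses receive equal weights.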
99.
We use dynamic factor and neural network models to identify current and past states of the US business cycle (rather than forecasting future ones). In the first step, we reduce noise in the data with a moving-average filter. Dynamic factors are then extracted from a large-scale dataset consisting of more than 100 variables. In the last step, these dynamic factors are fed into the neural network model to predict business cycle regimes. We show that our proposed method tracks US business cycle regimes quite accurately both in-sample and out-of-sample, without taking historical data availability into account. Our results also indicate that noise reduction is an important step for business cycle prediction. Furthermore, using pseudo-real-time and vintage data, we show that our neural network model identifies turning points accurately and very quickly in real time.
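The first step, the noise-reducing moving-average filter, is simple enough to sketch. This version is a trailing average (the paper's exact window and alignment are not stated in the abstract):

```python
def moving_average(series, window):
    """Trailing moving-average filter: each point becomes the mean of the
    last `window` observations (fewer at the start of the series).
    Used here as a noise-reduction step before factor extraction."""
    out = []
    for t in range(len(series)):
        lo = max(0, t - window + 1)
        chunk = series[lo:t + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

A trailing (rather than centered) window avoids using future observations, which matters for the pseudo-real-time evaluation the abstract describes.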
100.
This paper is concerned with model averaging estimation for conditional volatility models. Given a set of candidate models with different functional forms, we propose a model averaging estimator and forecast for conditional volatility, and construct the corresponding weight-choosing criterion. Under some regularity conditions, we show that the weight selected by the criterion asymptotically minimizes the true Kullback-Leibler divergence, which is the distributional approximation error, as well as the Itakura-Saito distance, which is the distance between the true and the estimated or forecast conditional volatility. Monte Carlo experiments support our newly proposed method. As empirical applications of our method, we investigate a total of nine major stock market indices and make a 1-day-ahead volatility forecast for each dataset. The empirical results show that the model averaging forecast achieves the highest accuracy in terms of all types of loss functions in most cases, capturing the movement of the unknown true conditional volatility.
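The Itakura-Saito distance between two volatility sequences has a closed form: d(s, h) = s/h - log(s/h) - 1 per observation, which is zero exactly when the forecast matches the truth. A minimal sketch (summed rather than averaged over observations; the paper's exact normalization is not given in the abstract):

```python
import math

def itakura_saito(true_vol, forecast_vol):
    """Itakura-Saito distance between true and forecast conditional
    volatilities, summed over observations. Non-negative, and zero
    iff the two sequences coincide."""
    d = 0.0
    for s, h in zip(true_vol, forecast_vol):
        r = s / h
        d += r - math.log(r) - 1.0
    return d
```

Unlike squared error, the distance is asymmetric in over- versus under-prediction of volatility, which is why it is a natural companion to the KL divergence in this setting.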
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号